Národní úložiště šedé literatury (National Repository of Grey Literature) — found 4 records; search took 0.01 seconds.
Visual Question Answering
Kocurek, Pavel ; Ondřej, Karel (reviewer) ; Fajčík, Martin (supervisor)
Visual Question Answering (VQA) is a task in which an image and a question serve as input and the output is an answer. Despite many research advances, and unlike image captioning, VQA is rarely used in practice. This work aims to narrow the gap between research and practice. To examine the potential of VQA for blind and visually impaired people, this thesis develops first a demonstration VQA application and then a smartphone application. A study with 20 participants from this community was conducted: participants used the application for two weeks and then filled out a questionnaire. 80 % of respondents rated the accuracy of the VQA application as sufficient or better, and most would appreciate VQA support in their image-captioning application. Following this finding, the work seeks to establish the link between image captioning and VQA; in particular, it studies the informativeness provided by both systems in different scenarios. It collects a novel dataset of 111 images with manually annotated captions and diverse scenes. An experiment comparing the knowledge obtained from each system showed success rates of 69.9 % for VQA and 46.2 % for image captioning. In another experiment, participants were able to select the correct caption based on VQA answers 70.9 % of the time. The results suggest that VQA outperforms image captioning at conveying image details and therefore should be used in practice more often.
Image Captioning with Recurrent Neural Networks
Kvita, Jakub ; Španěl, Michal (reviewer) ; Hradiš, Michal (supervisor)
In this work I deal with automatic generation of image captions using several types of neural networks. The thesis builds on papers from the MS COCO Captioning Challenge 2015 and on character-level language models popularized by A. Karpathy. The proposed model combines a convolutional and a recurrent neural network in an encoder-decoder architecture: the vector representing the encoded image is passed to the language model as the memory values of the LSTM layers in the network. This work investigates whether a model with such a simple architecture is able to generate captions and how it compares to other contemporary solutions. One of the findings is that the proposed architecture is not sufficient for any image captioning task.
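The architecture described in this abstract (an image feature vector injected as the initial LSTM memory of a character-level language model) can be sketched roughly as follows. This is a minimal illustrative sketch, not the thesis's actual code; all layer sizes, the class name, and the use of PyTorch are assumptions.

```python
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    """Sketch of the described encoder-decoder: a precomputed CNN image
    feature initializes the LSTM memory (cell state), and a character-level
    language model then generates the caption one step at a time.
    All dimensions here are illustrative, not taken from the thesis."""

    def __init__(self, vocab_size=64, embed_dim=32, hidden_dim=128, image_dim=512):
        super().__init__()
        # Project the CNN image feature into the LSTM state size.
        self.img_to_cell = nn.Linear(image_dim, hidden_dim)
        self.char_embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feat, tokens):
        # Encoded image becomes the initial cell state ("memory values"
        # of the LSTM); the hidden state starts at zero.
        c0 = self.img_to_cell(image_feat).unsqueeze(0)  # (1, B, H)
        h0 = torch.zeros_like(c0)
        x = self.char_embed(tokens)                     # (B, T, E)
        y, _ = self.lstm(x, (h0, c0))
        return self.out(y)                              # (B, T, vocab)

# Tiny smoke run with random stand-in data.
model = CaptionDecoder()
image_feat = torch.randn(2, 512)        # stand-in for CNN encoder output
tokens = torch.randint(0, 64, (2, 10))  # character ids
logits = model(image_feat, tokens)
print(logits.shape)  # torch.Size([2, 10, 64])
```

In practice the image feature would come from a pretrained convolutional encoder, and the per-character logits would be trained with cross-entropy against the reference captions.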

Would you like to be notified when new records matching this query appear?
Subscribe to the RSS feed.